66 research outputs found

    High-quality Panorama Stitching based on Asymmetric Bidirectional Optical Flow

    In this paper, we propose a panorama stitching algorithm based on asymmetric bidirectional optical flow. The algorithm takes multiple photos captured by fisheye-lens cameras as input and merges them into a high-quality 360-degree spherical panoramic image. For photos taken from a distant perspective, the parallax among them is relatively small, and the resulting panorama can be nearly seamless and undistorted. For photos taken from a close perspective, or with relatively large parallax, a seamless though partially distorted panorama can still be obtained. Moreover, with the help of a Graphics Processing Unit (GPU), the algorithm completes the whole stitching process very quickly: it typically takes less than 30 s to produce a panoramic image of 9000-by-4000 pixels, which makes our algorithm valuable for many real-time applications. Our code is available at https://github.com/MungoMeng/Panorama-OpticalFlow.
    Comment: Published at the 5th International Conference on Computational Intelligence and Applications (ICCIA 2020)
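    As a toy illustration of the blending idea (not the paper's implementation), the sketch below warps a 1-D overlap region with asymmetric bidirectional flow: pixels near each source image trust that image's flow more, so the two warped images meet smoothly in between. The function name and the nearest-neighbour sampling are our own simplifications.

```python
def blend_overlap(img_a, img_b, flow_ab, flow_ba):
    """Blend the 1-D overlap of two images using asymmetric
    bidirectional flow: the warp weight t grows from 0 at image A's
    edge to 1 at image B's edge (toy sketch, n >= 2 samples)."""
    n = len(img_a)
    out = []
    for x in range(n):
        t = x / (n - 1)  # asymmetric weight across the overlap
        # warp each image part-way toward the common seam,
        # using nearest-neighbour sampling for simplicity
        xa = min(n - 1, max(0, round(x + t * flow_ab[x])))
        xb = min(n - 1, max(0, round(x - (1 - t) * flow_ba[x])))
        out.append((1 - t) * img_a[xa] + t * img_b[xb])
    return out
```

    With zero flow and identical inputs the blend reproduces the input, and the endpoints always come purely from the nearer image, which is what makes the transition seamless.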

    Spiking Inception Module for Multi-layer Unsupervised Spiking Neural Networks

    Spiking Neural Network (SNN), as a brain-inspired approach, is attracting attention due to its potential to produce ultra-energy-efficient hardware. Competitive learning based on Spike-Timing-Dependent Plasticity (STDP) is a popular method to train an unsupervised SNN. However, previous unsupervised SNNs trained through this method are limited to shallow networks with only one learnable layer and cannot achieve satisfactory results when compared with multi-layer SNNs. In this paper, we ease this limitation in three steps: 1) we propose a Spiking Inception (Sp-Inception) module, inspired by the Inception module in the Artificial Neural Network (ANN) literature; this module is trained through STDP-based competitive learning and outperforms the baseline modules in learning capability, learning efficiency, and robustness; 2) we propose a Pooling-Reshape-Activate (PRA) layer to make the Sp-Inception module stackable; 3) we stack multiple Sp-Inception modules to construct multi-layer SNNs. Our algorithm outperforms the baseline algorithms on the handwritten-digit classification task and reaches state-of-the-art results on the MNIST dataset among existing unsupervised SNNs.
    Comment: Published at the 2020 International Joint Conference on Neural Networks (IJCNN); Extended from arXiv:2001.0168
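    The Sp-Inception module is trained with STDP-based competitive learning; while the paper's exact learning rule is not reproduced here, a generic pair-based STDP weight update (a textbook formulation, with hypothetical parameter values) looks like:

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.04,
                tau=20.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike
    precedes the postsynaptic one (dt >= 0), depress otherwise.
    The update magnitude decays exponentially with |dt|."""
    dt = t_post - t_pre
    if dt >= 0:
        w += a_plus * math.exp(-dt / tau) * (w_max - w)   # LTP
    else:
        w -= a_minus * math.exp(dt / tau) * w             # LTD
    return min(max(w, 0.0), w_max)                        # clamp
```

    In competitive learning, only the winning neuron (the first to spike) applies such an update, which drives different neurons to specialize on different input patterns.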

    Merging-Diverging Hybrid Transformer Networks for Survival Prediction in Head and Neck Cancer

    Survival prediction is crucial for cancer patients as it provides early prognostic information for treatment planning. Recently, deep survival models based on deep learning and medical images have shown promising performance for survival prediction. However, existing deep survival models are not well developed in utilizing multi-modality images (e.g., PET-CT) and in extracting region-specific information (e.g., the prognostic information in Primary Tumor (PT) and Metastatic Lymph Node (MLN) regions). In view of this, we propose a merging-diverging learning framework for survival prediction from multi-modality images. This framework has a merging encoder to fuse multi-modality information and a diverging decoder to extract region-specific information. In the merging encoder, we propose a Hybrid Parallel Cross-Attention (HPCA) block to effectively fuse multi-modality features via parallel convolutional layers and cross-attention transformers. In the diverging decoder, we propose a Region-specific Attention Gate (RAG) block to screen out the features related to lesion regions. We demonstrate our framework on survival prediction from PET-CT images in Head and Neck (H&N) cancer by designing an X-shaped merging-diverging hybrid transformer network (named XSurv). Our XSurv combines the complementary information in PET and CT images and extracts the region-specific prognostic information in PT and MLN regions. Extensive experiments on the public dataset of the HEad and neCK TumOR segmentation and outcome prediction challenge (HECKTOR 2022) demonstrate that our XSurv outperforms state-of-the-art survival prediction methods.
    Comment: Early Accepted at International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2023)
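    The cross-attention transformers in the HPCA block let features from one modality attend to the other. A minimal scaled dot-product cross-attention, stripped of the paper's convolutional branches and written in plain Python for clarity, can be sketched as:

```python
import math

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: queries from one modality
    attend to keys/values from the other modality (a generic sketch
    of the fusion idea, not the paper's exact HPCA block)."""
    d = len(queries[0])
    out = []
    for q in queries:
        # similarity of this query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        # numerically stable softmax over the scores
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # attention-weighted sum of the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

    With PET features as queries and CT features as keys/values (or vice versa), each output token is a CT-informed re-weighting of PET information, which is the complementary fusion the encoder aims for.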

    AutoFuse: Automatic Fusion Networks for Deformable Medical Image Registration

    Deformable image registration aims to find a dense non-linear spatial correspondence between a pair of images, which is a crucial step for many medical tasks such as tumor growth monitoring and population analysis. Recently, Deep Neural Networks (DNNs) have been widely recognized for their ability to perform fast end-to-end registration. However, DNN-based registration needs to explore the spatial information of each image and fuse this information to characterize spatial correspondence. This raises an essential question: what is the optimal fusion strategy to characterize spatial correspondence? Existing fusion strategies (e.g., early fusion, late fusion) were empirically designed to fuse information according to manually defined prior knowledge, which inevitably constrains the registration performance within the limits of empirical designs. In this study, we depart from existing empirically designed fusion strategies and develop a data-driven fusion strategy for deformable image registration. To achieve this, we propose an Automatic Fusion network (AutoFuse) that provides flexibility to fuse information at many potential locations within the network. A Fusion Gate (FG) module is also proposed to control how information is fused at each potential network location based on training data. Our AutoFuse can automatically optimize its fusion strategy during training and generalizes to both unsupervised registration (without any labels) and semi-supervised registration (with weak labels provided for partial training data). Extensive experiments on two well-benchmarked medical registration tasks (inter- and intra-patient registration) with eight public datasets show that our AutoFuse outperforms state-of-the-art unsupervised and semi-supervised registration methods.
    Comment: Under Review
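    A Fusion Gate can be pictured as a learned gate that decides, per network location, how much of one fusion candidate versus another to pass forward, with the gate parameter optimized from training data rather than fixed by hand. The sketch below is a hypothetical scalar-gate simplification of that idea, not the paper's FG module:

```python
import math

def fusion_gate(feat_fused, feat_separate, gate_logit):
    """Blend two feature candidates with a sigmoid gate. The logit
    would be a trained parameter; here it is just an input. This is
    a toy scalar simplification of a data-driven fusion gate."""
    g = 1.0 / (1.0 + math.exp(-gate_logit))   # sigmoid in (0, 1)
    return [g * a + (1.0 - g) * b
            for a, b in zip(feat_fused, feat_separate)]
```

    During training, gradient descent on the gate logits lets the network discover where fusing information helps and where keeping streams separate is better, which is the data-driven alternative to fixed early/late fusion.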

    Non-iterative Coarse-to-fine Transformer Networks for Joint Affine and Deformable Image Registration

    Image registration is a fundamental requirement for medical image analysis. Registration methods based on deep learning have been widely recognized for their ability to perform fast end-to-end registration. Many deep registration methods have achieved state-of-the-art performance by performing coarse-to-fine registration, in which multiple registration steps are iterated with cascaded networks. Recently, Non-Iterative Coarse-to-finE (NICE) registration methods have been proposed to perform coarse-to-fine registration in a single network, showing advantages in both registration accuracy and runtime. However, existing NICE registration methods mainly focus on deformable registration, while affine registration, a common prerequisite, still relies on time-consuming traditional optimization-based methods or extra affine registration networks. In addition, existing NICE registration methods are limited by the intrinsic locality of convolution operations. Transformers may address this limitation thanks to their ability to capture long-range dependencies, but the benefits of using transformers for NICE registration have not been explored. In this study, we propose a Non-Iterative Coarse-to-finE Transformer network (NICE-Trans) for image registration. Our NICE-Trans is the first deep registration method that (i) performs joint affine and deformable coarse-to-fine registration within a single network, and (ii) embeds transformers into a NICE registration framework to model long-range relevance between images. Extensive experiments with seven public datasets show that our NICE-Trans outperforms state-of-the-art registration methods in both registration accuracy and runtime.
    Comment: Accepted at International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2023)
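    Joint affine-and-deformable registration ultimately amounts to composing the two transforms the network predicts: a global affine map followed by a dense displacement field. A toy 2-D composition on a handful of points (our own simplification for illustration, not NICE-Trans code) is:

```python
def compose_affine_deformable(points, A, b, disp):
    """Warp 2-D points by an affine map (matrix A, offset b) and then
    add a per-point deformable displacement -- the joint transform a
    single forward pass of a joint registration network would output."""
    warped = []
    for (x, y), (dx, dy) in zip(points, disp):
        # global affine component
        ax = A[0][0] * x + A[0][1] * y + b[0]
        ay = A[1][0] * x + A[1][1] * y + b[1]
        # local deformable component on top
        warped.append((ax + dx, ay + dy))
    return warped
```

    Predicting both components in one pass removes the separate affine pre-registration step that earlier NICE methods still depended on.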

    DeepMTS: Deep Multi-task Learning for Survival Prediction in Patients with Advanced Nasopharyngeal Carcinoma using Pretreatment PET/CT

    Nasopharyngeal Carcinoma (NPC) is a malignant epithelial cancer arising from the nasopharynx. Survival prediction is a major concern for NPC patients, as it provides early prognostic information to plan treatments. Recently, deep survival models based on deep learning have demonstrated the potential to outperform traditional radiomics-based survival prediction models. Deep survival models usually use image patches covering the whole target region (e.g., the nasopharynx for NPC) or containing only segmented tumor regions as the input. However, models using the whole target region also include non-relevant background information, while models using segmented tumor regions disregard potentially prognostic information outside the primary tumors (e.g., local lymph node metastasis and adjacent tissue invasion). In this study, we propose a 3D end-to-end Deep Multi-Task Survival model (DeepMTS) for joint survival prediction and tumor segmentation in advanced NPC from pretreatment PET/CT. Our novelty is the introduction of a hard-sharing segmentation backbone to guide the extraction of local features related to the primary tumors, which reduces the interference from non-relevant background information. In addition, we introduce a cascaded survival network to capture the prognostic information outside the primary tumors and further leverage the global tumor information (e.g., tumor size, shape, and location) derived from the segmentation backbone. Our experiments with two clinical datasets demonstrate that our DeepMTS can consistently outperform traditional radiomics-based survival prediction models and existing deep survival models.
    Comment: Accepted at IEEE Journal of Biomedical and Health Informatics (JBHI)
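    Deep survival models of this kind are commonly trained with a negative Cox partial log-likelihood over predicted risk scores. A minimal sketch of that loss (a standard formulation, not necessarily DeepMTS's exact objective) is:

```python
import math

def cox_partial_loglik(risks, times, events):
    """Negative Cox partial log-likelihood for predicted risk scores.
    Each observed event's risk competes against the risk set of all
    patients still under observation at that event time; censored
    patients contribute only through risk sets."""
    loss, n_events = 0.0, 0
    for i, (t_i, e_i) in enumerate(zip(times, events)):
        if not e_i:                        # censored: no direct term
            continue
        risk_set = [r for r, t in zip(risks, times) if t >= t_i]
        m = max(risk_set)                  # log-sum-exp for stability
        log_denom = m + math.log(sum(math.exp(r - m) for r in risk_set))
        loss -= risks[i] - log_denom
        n_events += 1
    return loss / max(n_events, 1)
```

    Minimizing this loss pushes the model to assign higher risk to patients who experience events earlier, which is exactly the ranking a survival model is evaluated on. In a multi-task setting such as DeepMTS, this term would be combined with a segmentation loss from the shared backbone.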

    Intrinsic Cerebro-Cerebellar Functional Connectivity Reveals the Function of Cerebellum VI in Reading-Related Skills

    Funding: This work was supported by grants from the National Natural Science Foundation of China (NSFC: 31971036, 31971039, and 31571158). Peer reviewed. Publisher PDF.

    Effect of 60Co-γ Irradiation on Postharvest Physiology and Lipid Nutrition of Fresh Hazelnuts

    Fresh hazelnuts were treated with 60Co-γ irradiation (0, 0.25, 0.50, 0.75 and 1.00 kGy) and stored at (4.0 ± 0.5) ℃ for up to three months. Changes in physiological indexes and lipid nutrition were monitored during the storage period. The results showed that irradiation at doses of 0.25–1.00 kGy delayed the decline in superoxide dismutase (SOD) and catalase (CAT) activity, and reduced the activity of polyphenol oxidase (PPO) in fresh hazelnuts. The irradiation dose of 0.50 kGy was the most effective: at the end of storage, the respiratory intensity, malondialdehyde (MDA) content and lipoxygenase activity of the irradiated sample had decreased by 25.81%, 18.50% and 4.18%, respectively, compared with those of the non-irradiated one. However, irradiation had no significant effect on fatty acid composition and content, peroxide value (POV) or acid value (AV). Principal component analysis (PCA) performed on fresh hazelnuts stored for 90 d also showed that 0.50 kGy 60Co-γ irradiation imparted the best storage quality. These results suggest that 60Co-γ irradiation can delay senescence and effectively extend the shelf life of fresh hazelnuts by modulating their postharvest physiology.

    Prediction of 5-year progression-free survival in advanced nasopharyngeal carcinoma with pretreatment PET/CT using multi-modality deep learning-based radiomics

    Objective: Deep learning-based radiomics (DLR) has achieved great success in medical image analysis and has been considered a replacement for conventional radiomics that relies on handcrafted features. In this study, we aimed to explore the capability of DLR for the prediction of 5-year progression-free survival (PFS) in advanced nasopharyngeal carcinoma (NPC) using pretreatment PET/CT images.
    Methods: A total of 257 patients (170/87 patients in internal/external cohorts) with advanced NPC (TNM stage III or IVa) were enrolled. We developed an end-to-end multi-modality DLR model, in which a 3D convolutional neural network was optimized to extract deep features from pretreatment PET/CT images and predict the probability of 5-year PFS. The TNM stage, as a high-level clinical feature, could be integrated into our DLR model to further improve the prognostic performance. For a comparison between conventional radiomics and DLR, 1,456 handcrafted features were extracted, and optimal conventional radiomics methods were selected from 54 cross-combinations of six feature selection methods and nine classification methods. In addition, risk group stratification was performed with the clinical signature, the conventional radiomics signature, and the DLR signature.
    Results: Our multi-modality DLR model using both PET and CT achieved higher prognostic performance (area under the receiver operating characteristic curve (AUC) = 0.842 ± 0.034 and 0.823 ± 0.012 for the internal and external cohorts) than the optimal conventional radiomics method (AUC = 0.796 ± 0.033 and 0.782 ± 0.012). Furthermore, the multi-modality DLR model outperformed single-modality DLR models using only PET (AUC = 0.818 ± 0.029 and 0.796 ± 0.009) or only CT (AUC = 0.657 ± 0.055 and 0.645 ± 0.021). For risk group stratification, the conventional radiomics signature and the DLR signature enabled significant separation between the high- and low-risk patient groups in both the internal and external cohorts (p < 0.001), while the clinical signature failed in the external cohort (p = 0.177).
    Conclusion: Our study identified potential prognostic tools for survival prediction in advanced NPC, suggesting that DLR could provide complementary value to the current TNM staging.
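    The AUC values reported above can be read as the probability that the model ranks a patient who progresses above one who does not (ties counting half). A minimal empirical AUC over two score lists illustrates this reading:

```python
def auc(scores_pos, scores_neg):
    """Empirical AUC: the fraction of (positive, negative) score
    pairs in which the positive case outranks the negative one,
    with ties counted as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

    An AUC of 0.842 therefore means that for a randomly drawn progressing/non-progressing patient pair, the DLR model scores the progressing patient higher about 84% of the time.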

    Responses of sequential and hierarchical phenological events to warming and cooling in alpine meadows

    Organisms' life cycles consist of hierarchical stages, from a single phenological stage (for example, flowering within a season), to vegetative and reproductive phases, to the total lifespan of the individual. Yet phenological events are typically studied in isolation, limiting our understanding of life history responses to climate change. Here, we reciprocally transfer plant communities along an elevation gradient to investigate plastic changes in the duration of sequential phenological events for six alpine species. We show that prolonged flowering leads to longer reproductive phases and activity periods when plants are moved to warmer locations. In contrast, shorter post-fruiting leaf and flowering stages lead to shorter vegetative and reproductive phases, respectively, resulting in shorter activity periods when plants are moved to cooler conditions. Therefore, phenological responses to warming and cooling do not simply mirror one another in opposite directions, and low temperature may limit reproductive allocation in the alpine region.